AAAI.2022 - Senior Member Presentation

Total: 16

#1 Toward a New Science of Common Sense

Authors: Ronald J. Brachman ; Hector J. Levesque

Common sense has always been of interest in AI, but has rarely taken center stage. Despite its mention in one of John McCarthy's earliest papers and years of work by dedicated researchers, arguably no AI system with a serious amount of general common sense has ever emerged. Why is that? What's missing? Examples of AI systems' failures of common sense abound, and they point to AI's frequent focus on expertise as the cause. Those attempting to break the brittleness barrier, even in the context of modern deep learning, have tended to invest their energy in large numbers of small bits of commonsense knowledge. But all the commonsense knowledge fragments in the world don't add up to a system that actually demonstrates common sense in a human-like way. We advocate examining common sense from a broader perspective than in the past. Common sense is more complex than it has been taken to be and is worthy of its own scientific exploration.

#2 Local Justice and the Algorithmic Allocation of Scarce Societal Resources

Author: Sanmay Das

AI is increasingly used to aid decision-making about the allocation of scarce societal resources, for example housing for homeless people, organs for transplantation, and food donations. Recently, there have been several proposals for how to design objectives for these systems that attempt to achieve some combination of fairness, efficiency, incentive compatibility, and satisfactory aggregation of stakeholder preferences. This paper lays out possible roles and opportunities for AI in this domain, arguing for a closer engagement with the political philosophy literature on local justice, which provides a framework for thinking about how societies have over time framed objectives for such allocation problems. It also discusses how we may be able to integrate into this framework the opportunities and risks opened up by the ubiquity of data and the availability of algorithms that can use them to make accurate predictions about the future.

#3 Training on the Test Set: Mapping the System-Problem Space in AI

Authors: José Hernández-Orallo ; Wout Schellaert ; Fernando Martínez-Plumed

Many present and future problems associated with artificial intelligence are not due to its limitations, but to our poor assessment of its behaviour. Our evaluation procedures produce aggregated performance metrics that lack detail and quantified uncertainty about the following question: how will an AI system, with a particular profile π, behave for a new problem, characterised by a particular situation μ? Instead of just aggregating test results, we can use machine learning methods to fully capitalise on this evaluation information. In this paper, we introduce the concept of an assessor model, R̂(r | π, μ), a conditional probability estimator trained on test data. We discuss how these assessors can be built by using information of the full system-problem space and illustrate a broad range of applications that derive from varied inferences and aggregations from R̂. Building good assessor models will change the predictive and explanatory power of AI evaluation and will lead to new research directions for building and using them. We propose accompanying every deployed AI system with its own assessor.
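To make the assessor idea concrete: an assessor is just an estimator fit on past evaluation records that predicts the result r a system with profile π will obtain on a problem with characteristics μ. The following is a toy sketch, not from the paper; the features (a single "capability" number for the system, a single "difficulty" number for the problem) and the k-nearest-neighbour estimator are illustrative assumptions.

```python
# Toy assessor model: predict the result r of a (system profile pi, problem mu)
# pair from a log of past test records. Features and data are illustrative.
import math
import random

random.seed(0)

def true_result(pi, mu):
    # Hypothetical ground truth: success rises with capability, falls with difficulty.
    return max(0.0, min(1.0, 0.5 + 0.4 * pi - 0.5 * mu))

# Synthetic evaluation log: (system capability pi, instance difficulty mu, result r).
log = []
for _ in range(400):
    pi, mu = random.random(), random.random()
    log.append((pi, mu, true_result(pi, mu) + random.gauss(0, 0.02)))

def assessor(pi, mu, k=15):
    """Predict r for a new (system, problem) pair from the k closest records."""
    nearest = sorted(log, key=lambda rec: math.dist((pi, mu), rec[:2]))[:k]
    return sum(r for _, _, r in nearest) / k

easy = assessor(pi=0.8, mu=0.1)
hard = assessor(pi=0.8, mu=0.9)
print(f"predicted result: easy instance {easy:.2f}, hard instance {hard:.2f}")
```

A richer assessor would replace the nearest-neighbour average with any conditional probability estimator and feed it full system and problem feature vectors, but the workflow — log test results, fit, then query for unseen system-problem pairs — is the same.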

#4 Symbols as a Lingua Franca for Bridging Human-AI Chasm for Explainable and Advisable AI Systems

Authors: Subbarao Kambhampati ; Sarath Sreedharan ; Mudit Verma ; Yantian Zha ; Lin Guan

Despite the surprising power of many modern AI systems that often learn their own representations, there is significant discontent about their inscrutability and the attendant problems in their ability to interact with humans. While alternatives such as neuro-symbolic approaches have been proposed, there is a lack of consensus on what they are about. There are often two independent motivations: (i) symbols as a lingua franca for human-AI interaction and (ii) symbols as (system-produced) abstractions used in internal reasoning. The jury is still out on whether AI systems will need to use symbols in their internal reasoning to achieve general intelligence capabilities. Whatever the answer turns out to be, the need for (human-understandable) symbols in human-AI interaction seems quite compelling. Symbols, like emotions, may well not be a sine qua non for intelligence per se, but they will be crucial for AI systems to interact with us humans--as we can neither turn off our emotions nor get by without our symbols. In particular, in many human-designed domains, humans would be interested in providing explicit (symbolic) knowledge and advice--and expect machine explanations in kind. This alone requires AI systems to at least do their I/O in symbolic terms. In this blue sky paper, we argue this point of view, and discuss research directions that need to be pursued to allow for this type of human-AI interaction.

#5 The Computational Gauntlet of Human-Like Learning

Author: Pat Langley

In this paper, I pose a major challenge for AI researchers: to develop systems that learn in a human-like manner. I briefly review the history of machine learning, noting that early work made close contact with results from cognitive psychology but that this is no longer the case. I identify seven characteristics of human behavior that, if reproduced, would offer better ways to acquire expertise than statistical induction over massive training sets. I illustrate these points with two domains - mathematics and driving - where people are effective learners and review systems that address them. In closing, I suggest ways to encourage more research on human-like learning.

#6 BabelNet Meaning Representation: A Fully Semantic Formalism to Overcome Language Barriers

Authors: Roberto Navigli ; Rexhina Blloshmi ; Abelardo Carlos Martínez Lorenzo

Conceptual representations of meaning have long been the general focus of Artificial Intelligence (AI) in pursuit of the fundamental goal of machine understanding, with innumerable efforts made in Knowledge Representation, Speech and Natural Language Processing, Computer Vision, inter alia. Even today, at the core of Natural Language Understanding lies the task of Semantic Parsing, the objective of which is to convert natural language sentences into machine-readable representations. In this paper, we aim to revive this historical dream of AI by putting forward a novel, all-embracing, fully semantic meaning representation that goes beyond the many existing formalisms. Indeed, we tackle their key limits by fully abstracting text into meaning and introducing language-independent concepts and semantic relations, in order to obtain an interlingual representation. Our proposal aims to overcome the language barrier and connect not only texts across languages, but also images, videos, speech and sound, and logical formulas, across many fields of AI.

#7 Expert-Informed, User-Centric Explanations for Machine Learning

Authors: Michael Pazzani ; Severine Soltani ; Robert Kaufman ; Samson Qian ; Albert Hsiao

We argue that the dominant approach to explainable AI for explaining image classification, annotating images with heatmaps, provides little value for users unfamiliar with deep learning. We argue that explainable AI for images should produce output like experts produce when communicating with one another, with apprentices, and with novices. We provide an expanded set of goals of explainable AI systems and propose a Turing Test for explainable AI.

#8 Subjective Attributes in Conversational Recommendation Systems: Challenges and Opportunities

Authors: Filip Radlinski ; Craig Boutilier ; Deepak Ramachandran ; Ivan Vendrov

The ubiquity of recommender systems has increased the need for higher-bandwidth, natural and efficient communication with users. This need is increasingly filled by recommenders that support natural language interaction, often conversationally. Given the inherent semantic subjectivity present in natural language, we argue that modeling subjective attributes in recommenders is a critical, yet understudied, avenue of AI research. We propose a novel framework for understanding different forms of subjectivity, examine various recommender tasks that will benefit from a systematic treatment of subjective attributes, and outline a number of research challenges.

#9 Market Design for Drone Traffic Management

Authors: Sven Seuken ; Paul Friedrich ; Ludwig Dierks

The rapid development of drone technology is leading to more and more use cases being proposed. In response, regulators are drawing up drone traffic management frameworks. However, to design solutions that are efficient, fair, simple, non-manipulable, and scalable, we need market design and AI expertise. To this end, we introduce the drone traffic management problem as a new research challenge to the market design and AI communities. We present five design desiderata that we have derived from our interviews with stakeholders from the regulatory side as well as from public and private enterprises. Finally, we provide an overview of the solution space to point out possible directions for future research.

#10 Consent as a Foundation for Responsible Autonomy

Author: Munindar P. Singh

This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users. The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how understanding consent can help achieve responsible autonomy. Second, it outlines challenges for AI (in particular, for agents and multiagent systems) that merit investigation as a basis for modeling consent in multiagent systems and for applying consent to achieve responsible autonomy.

#11 Matching Market Design with Constraints

Authors: Haris Aziz ; Péter Biró ; Makoto Yokoo

Two-sided matching is an important research area that has had a major impact on the design of real-world matching markets. One consistent feature in many of the real-world applications is that they impose new feasibility constraints that lead to research challenges. We survey developments in the field of two-sided matching with various constraints, including those based on regions, diversity, multi-dimensional capacities, and matroids.
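The baseline mechanism that the surveyed constraint extensions build on is deferred acceptance (Gale-Shapley) with simple capacity constraints. The sketch below is illustrative only — the applicant/school names and preferences are made up, and the regional, diversity, and matroid constraints discussed in the paper require substantially more machinery than plain per-school capacities.

```python
# Minimal applicant-proposing deferred acceptance with per-school capacities,
# the baseline that constrained matching-market designs extend.

def deferred_acceptance(applicant_prefs, school_prefs, capacity):
    """Return a stable many-to-one matching: school -> list of held applicants."""
    rank = {s: {a: i for i, a in enumerate(p)} for s, p in school_prefs.items()}
    matched = {s: [] for s in school_prefs}          # tentatively held applicants
    next_choice = {a: 0 for a in applicant_prefs}    # next school a will propose to
    free = list(applicant_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(applicant_prefs[a]):
            continue                                 # a exhausted their list: unmatched
        s = applicant_prefs[a][next_choice[a]]
        next_choice[a] += 1
        matched[s].append(a)
        matched[s].sort(key=lambda x: rank[s][x])    # best-ranked applicants first
        if len(matched[s]) > capacity[s]:
            free.append(matched[s].pop())            # reject the worst held applicant
    return matched

applicants = {"a1": ["s1", "s2"], "a2": ["s1", "s2"], "a3": ["s1", "s2"]}
schools = {"s1": ["a1", "a2", "a3"], "s2": ["a2", "a1", "a3"]}
match = deferred_acceptance(applicants, schools, capacity={"s1": 1, "s2": 2})
print(match)  # s1 keeps only its top-ranked applicant; the others end up at s2
```

The constraints surveyed in the paper (regional caps, diversity quotas, multi-dimensional capacities, matroids) change the rejection step: whether a school may hold an applicant no longer depends on a single count, which is exactly what makes stability and strategy-proofness delicate.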

#12 Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey

Authors: Prajjwal Bhargava ; Vincent Ng

While commonsense knowledge acquisition and reasoning has traditionally been a core research topic in the knowledge representation and reasoning community, recent years have seen a surge of interest in the natural language processing community in developing pre-trained models and testing their ability to address a variety of newly designed commonsense knowledge reasoning and generation tasks. This paper presents a survey of these tasks, discusses the strengths and weaknesses of state-of-the-art pre-trained models for commonsense reasoning and generation as revealed by these tasks, and reflects on future research directions.

#13 Target Languages (vs. Inductive Biases) for Learning to Act and Plan

Author: Hector Geffner

Recent breakthroughs in AI have shown the remarkable power of deep learning and deep reinforcement learning. These developments, however, have been tied to specific tasks, and progress in out-of-distribution generalization has been limited. While it is assumed that these limitations can be overcome by incorporating suitable inductive biases, the notion of inductive bias itself is often left vague and does not provide meaningful guidance. In this paper, I articulate a different learning approach where representations do not emerge from biases in a neural architecture but are learned over a given target language with a known semantics. The basic ideas are implicit in mainstream AI, where representations have been encoded in languages ranging from fragments of first-order logic to probabilistic structural causal models. The challenge is to learn from data the representations that have traditionally been crafted by hand. Generalization is then a result of the semantics of the language. The goals of this paper are to make these ideas explicit, to place them in a broader context where the design of the target language is crucial, and to illustrate them in the context of learning to act and plan. To this end, after a general discussion, I consider learning representations of actions, general policies, and subgoals ("intrinsic rewards"). In these cases, learning is formulated as a combinatorial problem, but nothing prevents the use of deep learning techniques instead. Indeed, learning representations over languages with a known semantics provides an account of what is to be learned, while learning representations with neural nets provides a complementary account of how representations can be learned. The challenge and the opportunity are to bring the two together.

#14 Model-Based Diagnosis of Multi-Agent Systems: A Survey

Authors: Meir Kalech ; Avraham Natan

As systems involving multiple agents are increasingly deployed, there is a growing need to diagnose failures in such systems. Model-Based Diagnosis (MBD) is a well-known AI technique for diagnosing faults in systems. In this approach, a model of the diagnosed system is given and the real system is observed. A failure is announced when the real system's output contradicts the model's expected output. The model is then used to deduce the defective components that explain the unexpected observation. MBD is increasingly being deployed in distributed and multi-agent systems. In this survey, we summarize twenty years of research in the field of model-based diagnosis algorithms for MAS diagnosis. We identify three attributes that should be considered when examining MAS diagnosis: (1) The objective of the diagnosis: either diagnosing faults in the MAS plans or diagnosing coordination faults. (2) Centralized vs. distributed: the diagnosis method can be applied either by a centralized agent or by the agents in a distributed manner. (3) Temporal vs. non-temporal: temporal diagnosis is used to diagnose the MAS's temporal behaviors, whereas non-temporal diagnosis diagnoses behavior based on a single observation. We survey diverse studies in MBD of MAS based on these attributes, and provide novel research challenges in this field for the AI community.
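The MBD loop described above — predict from the model, compare with the observation, and deduce which component failures restore consistency — can be sketched on a toy single-agent example. The two-gate circuit, component names, and brute-force search below are illustrative assumptions, not from the survey; real MAS diagnosers work over plans or coordination models and avoid exhaustive enumeration.

```python
# Toy consistency-based MBD: find minimal fault sets that explain why the
# observed output contradicts the model's prediction. Circuit is illustrative.
from itertools import combinations

COMPONENTS = ["and1", "or1"]

def predict(inputs, healthy):
    """System model: and1 feeds or1. A faulty gate's output is None (unknown)."""
    a = (inputs["x"] and inputs["y"]) if "and1" in healthy else None
    out = (a or inputs["z"]) if ("or1" in healthy and a is not None) else None
    return out

def consistent(inputs, observed, healthy):
    pred = predict(inputs, healthy)
    return pred is None or pred == observed   # unknown outputs never contradict

def diagnoses(inputs, observed):
    """All subset-minimal fault sets consistent with the observation."""
    found = []
    for size in range(len(COMPONENTS) + 1):
        for faulty in combinations(COMPONENTS, size):
            if any(set(d) <= set(faulty) for d in found):
                continue                      # a smaller diagnosis already covers it
            healthy = set(COMPONENTS) - set(faulty)
            if consistent(inputs, observed, healthy):
                found.append(faulty)
    return found

obs = {"x": True, "y": True, "z": False}
# Model predicts True, we observed False: either gate failing explains it.
print(diagnoses(obs, observed=False))
```

The survey's three attributes map onto choices in this loop: what the model describes (plans vs. coordination), who runs the search (one agent vs. many), and whether observations span time or a single snapshot.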

#15 Delivering Trustworthy AI through Formal XAI

Authors: Joao Marques-Silva ; Alexey Ignatiev

The deployment of artificial intelligence (AI) systems in high-risk settings makes trustworthy AI a pressing requirement. This requirement is highlighted by recent EU guidelines and regulations, as well as by recommendations from the OECD and UNESCO, among several other examples. One critical premise of trustworthy AI is the necessity of finding explanations that offer reliable guarantees of soundness. This paper argues that the best-known eXplainable AI (XAI) approaches either fail to provide sound explanations or find explanations that can exhibit significant redundancy. The solution to these drawbacks is explanation approaches that offer formal guarantees of rigor. These formal explanations are not only sound but also guarantee irredundancy. This paper summarizes recent developments in the emerging discipline of formal XAI and outlines its existing challenges.

#16 Anatomizing Bias in Facial Analysis

Authors: Richa Singh ; Puspita Majumdar ; Surbhi Mittal ; Mayank Vatsa

Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups. Due to its impact on society, it has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals. This has led to research in the identification and mitigation of bias in AI systems. In this paper, we encapsulate bias detection/estimation and mitigation algorithms for facial analysis. Our main contributions include a systematic review of algorithms proposed for understanding bias, along with a taxonomy and extensive overview of existing bias mitigation algorithms. We also discuss open challenges in the field of biased facial analysis.